Results 1 - 20 of 22
1.
J Biomech; 167: 112074, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38614021

ABSTRACT

Suppression of noise from recorded signals is a critically important data processing step for biomechanical analyses. While a wide variety of filtering or smoothing spline methods are available, the majority are not well suited to signals with rapidly changing derivatives, such as motion data for impact-like events. This is because commonly used low-pass filtering approaches and smoothing splines typically assume a single fixed cut-off frequency or regularization penalty, which fails to describe rapid changes in the underlying function. To overcome these limitations, we examine a class of adaptive penalized splines (APS) that extend commonly used penalized spline smoothers by inferring temporal adaptations in the regularization penalty from observed data. Three variations of APS are examined, in which temporal variation of the spline penalization is described via either a series of independent random variables, an autoregressive process, or a smooth cubic spline. The performance of APS on simulated datasets is promising, with APS reducing RMSE by 48%-183% compared to a widely used Butterworth filtering approach. When inferring acceleration from noisy measurements describing the position of a pendulum impacting a barrier, we observe between a 13% (independent variables) and a 28% (spline) reduction in RMSE compared to a 4th-order Butterworth filter with an optimally selected cut-off frequency. In addition to the considerable improvement in RMSE, APS can provide estimates of uncertainty for fitted curves and for generated quantities such as peak accelerations or durations of stationary periods. As a result, we suggest that researchers consider APS when features such as impact peaks, rates of loading, or periods of negligible acceleration are of interest.
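As a point of reference for the baseline criticised above, a fixed cut-off, zero-lag Butterworth low-pass filter takes only a few lines; the sampling rate, cut-off, and test signal below are illustrative, and this sketch does not implement APS itself.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Simulated noisy position signal with an impact-like event at t = 0.5 s.
fs = 1000.0                        # sampling frequency (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.where(t < 0.5, 0.0, (t - 0.5) ** 2)   # sudden change in curvature
rng = np.random.default_rng(0)
noisy = signal + rng.normal(0, 0.001, t.size)

# 4th-order zero-lag Butterworth low-pass: a single fixed cut-off for the
# whole signal, which is exactly the limitation the abstract targets.
b, a = butter(4, 20 / (fs / 2))    # 20 Hz cut-off, normalised by Nyquist
smoothed = filtfilt(b, a, noisy)

rmse = np.sqrt(np.mean((smoothed - signal) ** 2))
print(round(rmse, 4))
```

The single cut-off is a global trade-off: lowering it smooths the quiet portion better but blurs the impact, which is the motivation for a time-varying penalty.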


Subjects
Acceleration, Biomechanical Phenomena
2.
Theor Appl Genet; 137(3): 64, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38430392

ABSTRACT

KEY MESSAGE: An improved estimator of genomic relatedness using low-depth high-throughput sequencing data for autopolyploids is developed. Its outputs strongly correlate with SNP array-based estimates, and it is available in the package GUSrelate. High-throughput sequencing (HTS) methods have reduced sequencing costs and resource requirements compared to array-based tools, facilitating the investigation of many non-model polyploid species. One important quantity that can be computed from HTS data is the genetic relatedness between all individuals in a population. However, HTS data are often messy, with multiple sources of error (e.g., sequencing errors or missing parental alleles) which, if not accounted for, can bias genomic relatedness estimates. We derive a new estimator for constructing a genomic relationship matrix (GRM) from HTS data for autopolyploid species that accounts for errors associated with low sequencing depths, implemented in the R package GUSrelate. Simulations revealed that GUSrelate performed similarly to existing GRM methods at high depth but reduced bias in self-relatedness estimates when the sequencing depth was low. Using a panel of 351 tetraploid potato genotypes, we found that GUSrelate produced GRMs from genotyping-by-sequencing (GBS) data that were highly correlated with a GRM computed from SNP array data, and less biased than existing methods when benchmarked against the array-based GRM estimates. GUSrelate provides researchers with a tool to reliably construct GRMs from low-depth HTS data.
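For background on what a GRM is, a naive VanRaden-style estimator from a matrix of allele dosages can be sketched as below. It ignores sequencing error and depth entirely (the very biases GUSrelate corrects) and is not the GUSrelate estimator; the tetraploid scaling and simulated dosages are illustrative.

```python
import numpy as np

def vanraden_grm(dosages, ploidy=4):
    """VanRaden-style genomic relationship matrix from an
    (individuals x SNPs) matrix of allele dosages (0..ploidy)."""
    p = dosages.mean(axis=0) / ploidy          # allele frequency per SNP
    Z = dosages - ploidy * p                   # centre by expected dosage
    denom = ploidy * np.sum(p * (1 - p))       # ploidy-aware scaling
    return Z @ Z.T / denom

# Simulated tetraploid dosages for 10 individuals at 200 SNPs.
rng = np.random.default_rng(1)
freqs = rng.uniform(0.1, 0.9, 200)
G = rng.binomial(4, freqs, size=(10, 200)).astype(float)
grm = vanraden_grm(G)
print(grm.shape)
```

With error-free, full-depth dosages the diagonal averages near 1; the paper's point is that naive dosages from low-depth reads inflate or bias exactly these self-relatedness terms.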


Assuntos
Técnicas de Genotipagem , Polimorfismo de Nucleotídeo Único , Humanos , Técnicas de Genotipagem/métodos , Análise de Sequência de DNA/métodos , Sequenciamento de Nucleotídeos em Larga Escala/métodos , Alelos
3.
Skelet Muscle; 13(1): 7, 2023 Apr 22.
Article in English | MEDLINE | ID: mdl-37087439

ABSTRACT

BACKGROUND: The functional and metabolic properties of skeletal muscles are partly a function of the spatial arrangement of fibers across the muscle belly. Many muscles feature a non-uniform spatial pattern of fiber types, and alterations to this arrangement can reflect age or disease and correlate with changes in muscle mass and strength. Despite the significance of these changes, descriptions of spatial fiber-type distributions across a muscle section are mainly provided qualitatively, by eye. While several quantitative methods have been proposed, difficulties in implementation have meant that robust statistical analysis of fiber-type distributions has not yielded new insight into the biological processes that drive age- or disease-related changes in fiber-type distributions. METHODS: We review currently available approaches for analyzing data reporting fast/slow fiber-type distributions on muscle sections before proposing a new method based on a generalized additive model. We compare current approaches with our new method by analyzing sections of three mouse soleus muscles that exhibit visibly different spatial fiber patterns, and we also apply our model to a dataset representing the fiber-type proportions and distributions of the mouse tibialis anterior. RESULTS: We highlight how current methods can lead to differing interpretations when applied to the same dataset, and we demonstrate that our new method is the first to permit location-based estimation of fiber-type probabilities, in turn enabling useful graphical representation. CONCLUSIONS: We present an open-access online application that implements current methods as well as our new method and that aids the interpretation of a variety of statistical tools for the spatial analysis of muscle fiber distributions.
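As a rough illustration of location-based estimation of fiber-type probabilities, the sketch below fits a logistic model with a quadratic spatial basis — a crude stand-in for the paper's generalized additive model, not the method itself. All coordinates, labels, and coefficients are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Toy section: fiber centroids in [0, 1]^2, with slow-fiber probability
# varying smoothly with position (higher toward the x = 0 edge).
rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, size=(300, 2))
p_true = expit(3 - 6 * xy[:, 0])
is_slow = rng.binomial(1, p_true)

# Quadratic spatial basis: a coarse substitute for a GAM's spline smooth.
X = np.column_stack([np.ones(300), xy, xy ** 2, xy[:, 0] * xy[:, 1]])

def nll(beta):
    """Bernoulli negative log-likelihood for the logistic model."""
    eta = X @ beta
    return np.sum(np.logaddexp(0, eta) - is_slow * eta)

fit = minimize(nll, np.zeros(X.shape[1]), method="BFGS")
p_hat = expit(X @ fit.x)           # fitted slow-fiber probability per location
print(round(float(np.corrcoef(p_hat, p_true)[0, 1]), 2))
```

The fitted surface `p_hat` is what enables the graphical representation the abstract mentions: probabilities mapped back onto fiber locations rather than a single section-wide proportion.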


Assuntos
Fibras Musculares Esqueléticas , Doenças Musculares , Camundongos , Animais , Fibras Musculares Esqueléticas/fisiologia , Músculo Esquelético/metabolismo , Doenças Musculares/metabolismo
4.
Biometrics; 79(4): 3803-3817, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36654190

ABSTRACT

We consider estimator and model choice when estimating abundance from capture-recapture data. Our work is motivated by a mark-recapture distance sampling example, where model and estimator choice led to unexpectedly large disparities in the estimates. To understand these differences, we look at three estimation strategies (maximum likelihood estimation, conditional maximum likelihood estimation, and Bayesian estimation) for both binomial and Poisson models. We show that assuming the data have a binomial or multinomial distribution introduces implicit and unnoticed assumptions that are not addressed when fitting with maximum likelihood estimation. This can have an important effect in finite samples, particularly if our data arise from multiple populations. We relate these results to those of restricted maximum likelihood in linear mixed effects models.
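A minimal sketch of the conditional-likelihood strategy discussed above, for a two-occasion study: detection probability is estimated from the capture histories of observed animals only, and abundance then follows by a Horvitz-Thompson-style correction. The counts are invented, and the paper's mark-recapture distance sampling setting is far richer than this.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two-occasion capture-recapture: m10/m01 seen on one occasion, m11 on both.
m10, m01, m11 = 45, 52, 23
n = m10 + m01 + m11                # distinct animals observed

def cond_nll(p):
    """Negative log-likelihood of the histories conditional on being seen
    at least once (so the unknown N drops out of the likelihood)."""
    seen = 1 - (1 - p) ** 2
    return -((m10 + m01) * np.log(p * (1 - p) / seen)
             + m11 * np.log(p ** 2 / seen))

p_hat = minimize_scalar(cond_nll, bounds=(1e-6, 1 - 1e-6),
                        method="bounded").x
N_hat = n / (1 - (1 - p_hat) ** 2)   # Horvitz-Thompson style abundance
print(round(p_hat, 3), round(N_hat, 1))
```

Full maximum likelihood would instead model n itself as Binomial(N, 1 − (1 − p)²) (or Poisson), which is where the implicit distributional assumptions the abstract discusses enter.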


Assuntos
Modelos Estatísticos , Densidade Demográfica , Teorema de Bayes , Modelos Lineares , Funções Verossimilhança
5.
J Biomech; 141: 111158, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35710465

ABSTRACT

Bayesian methods have recently been proposed to solve inverse kinematics problems for marker-based motion capture. The objective is to find the posterior distribution, a probabilistic summary of our knowledge and corresponding uncertainty about model parameters such as joint angles, segment angles, segment translations, and marker positions. To date, Bayesian inverse kinematics models have focused on a frame-by-frame solution which, if repeatedly applied, gives estimates that are discontinuous in time. We propose to overcome this limitation for continuous, planar inverse kinematics problems via the use of finite basis representations to model latent kinematic quantities as smooth, continuous functions. Our generalised smoothing approach is able to accurately approximate the solution to planar inverse kinematics problems defined by simple systems of ordinary differential equations, in addition to considerably more complex systems such as a planar analysis of human gait. Improvements in accuracy are considerable, with a decrease in average RMSE of 0.025 rad observed when estimating ankle joint angle for a randomly selected running stride with the proposed generalised smoothing approach compared to previous time-independent approaches. In addition, the generalised smoothing approach can effectively estimate kinematic parameters in the presence of missing data, along with derivatives of kinematic quantities, without the need for prior filtering or gap-filling of data.
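The idea of a finite basis representation can be illustrated with an ordinary smoothing B-spline, whose derivative comes from the same basis without any further filtering. This is only a conceptual sketch with toy data, not the paper's generalised smoothing approach.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Noisy samples of a joint angle over one (toy) stride.
t = np.linspace(0, 1, 101)
theta = 0.4 * np.sin(2 * np.pi * t)          # "true" angle (rad)
rng = np.random.default_rng(3)
y = theta + rng.normal(0, 0.01, t.size)

# Smoothing B-spline: the angle is represented by a small set of basis
# coefficients; s controls the smoothness/fidelity trade-off.
tck = splrep(t, y, s=t.size * 0.01 ** 2)
theta_hat = splev(t, tck)                    # smooth, continuous angle
omega_hat = splev(t, tck, der=1)             # angular velocity, analytically

err = np.sqrt(np.mean((theta_hat - theta) ** 2))
print(round(err, 4))
```

Because the derivative is taken on the fitted basis rather than on raw samples, gaps in the data and derivative estimation need no separate pre-processing step.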


Assuntos
Marcha , Modelos Biológicos , Teorema de Bayes , Fenômenos Biomecânicos , Humanos , Amplitude de Movimento Articular
7.
J Biomech; 126: 110597, 2021 Sep 20.
Article in English | MEDLINE | ID: mdl-34274870

ABSTRACT

Bayesian inference has recently been identified as an approach for estimating a subject's pose from noisy marker position data. Previous research suggests that Bayesian inference markedly reduces error for inverse kinematics problems relative to traditional least-squares approaches, with Bayesian estimators having reduced variance despite both estimators being unbiased. This result is surprising, as Bayesian estimators are typically similar to least-squares approaches unless highly informative prior distributions are used. The purpose of this work was therefore to examine the sensitivity of Bayesian inverse kinematics solutions to the prior distribution. Our results highlight that Bayesian solutions to inverse kinematics are sensitive to the choice of prior and that the previously reported superior performance of Bayesian inference is likely due to an overly informative prior distribution that unrealistically uses knowledge of the true kinematic pose. When more realistic, weakly informative priors that do not use the known kinematic pose are used, any improvements in estimator accuracy are minimal compared to the traditional least-squares approach. However, with appropriate priors, Bayesian inference can propagate uncertainty in marker positions to uncertainty in joint angles, a valuable contribution for kinematic analyses. When using Bayesian methods, we recommend that researchers use weakly informative priors and conduct a sensitivity analysis to highlight the effects of prior choice on analysis outcomes.
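The sensitivity to the prior shows up even in a one-dimensional conjugate normal toy problem: a weakly informative prior leaves the estimate essentially at the data, while a prior centred near the true pose shrinks both the estimate and its reported uncertainty. All numbers below are illustrative.

```python
import numpy as np

# One noisy marker-derived angle measurement (rad).
y, sigma = 0.50, 0.05              # observation and its noise SD

def posterior(mu0, tau0):
    """Normal-normal conjugate update for a prior N(mu0, tau0^2)."""
    w = tau0 ** 2 / (tau0 ** 2 + sigma ** 2)   # weight on the data
    mean = w * y + (1 - w) * mu0
    sd = np.sqrt(1 / (1 / tau0 ** 2 + 1 / sigma ** 2))
    return mean, sd

weak = posterior(0.0, 10.0)        # weakly informative prior
strong = posterior(0.48, 0.01)     # prior centred near the "true" pose
print(weak, strong)
```

With the strong prior, the posterior SD is dominated by the prior rather than the marker noise — precisely the mechanism by which an unrealistically informative prior manufactures apparent estimator superiority.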


Assuntos
Análise dos Mínimos Quadrados , Teorema de Bayes , Fenômenos Biomecânicos , Humanos
8.
Stat Med; 40(22): 4751-4763, 2021 Sep 30.
Article in English | MEDLINE | ID: mdl-33990992

ABSTRACT

It is difficult to estimate sensitivity and specificity of diagnostic tests when there is no gold standard. Latent class models have been proposed as a potential solution as they provide estimates without the need for a gold standard. Using a motivating example of the evaluation of point of care tests for leptospirosis in Tanzania, we show how a realistic violation of assumptions underpinning the latent class model can lead directly to substantial bias in the estimates of the parameters of interest. In particular, we consider the robustness of estimates of sensitivity, specificity, and prevalence, to the presence of additional latent states when fitting a two-state latent class model. The violation is minor in the sense that it cannot be routinely detected with goodness-of-fit procedures, but is major with regard to the resulting bias.
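For context, the standard two-state latent class model can be fitted by EM when three conditionally independent tests are available; the sketch below simulates such data and recovers prevalence, sensitivities, and specificities. It deliberately omits the additional latent states whose hidden presence the paper shows to be the source of bias; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate 3 conditionally independent binary tests, 2 latent classes.
n, prev = 2000, 0.3
se = np.array([0.9, 0.85, 0.8])    # sensitivities
sp = np.array([0.95, 0.9, 0.92])   # specificities
d = rng.binomial(1, prev, n)                      # true (latent) status
probs = np.where(d[:, None] == 1, se, 1 - sp)     # P(test positive)
y = rng.binomial(1, probs)                        # n x 3 test results

# EM for the two-state latent class model (no gold standard used).
pi, se_h, sp_h = 0.5, np.full(3, 0.7), np.full(3, 0.7)
for _ in range(200):
    # E-step: posterior probability of the diseased class per subject.
    l1 = pi * np.prod(se_h ** y * (1 - se_h) ** (1 - y), axis=1)
    l0 = (1 - pi) * np.prod((1 - sp_h) ** y * sp_h ** (1 - y), axis=1)
    r = l1 / (l1 + l0)
    # M-step: update prevalence, sensitivities, specificities.
    pi = r.mean()
    se_h = (r[:, None] * y).sum(axis=0) / r.sum()
    sp_h = ((1 - r)[:, None] * (1 - y)).sum(axis=0) / (1 - r).sum()

print(round(pi, 2), np.round(se_h, 2))
```

When the simulation truth really has two classes, as here, the EM estimates are close to the generating values; the paper's warning is that a realistic third latent state breaks this without flagging itself in goodness-of-fit checks.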


Assuntos
Testes Diagnósticos de Rotina , Modelos Estatísticos , Teorema de Bayes , Viés , Humanos , Análise de Classes Latentes , Prevalência , Sensibilidade e Especificidade
9.
Biometrics; 76(2): 392-402, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31517386

ABSTRACT

A spatial open-population capture-recapture model is described that extends both the non-spatial open-population model of Schwarz and Arnason and the spatially explicit closed-population model of Borchers and Efford. The superpopulation of animals available for detection at some time during a study is conceived as a two-dimensional Poisson point process. Individual probabilities of birth and death follow the conventional open-population model. Movement between sampling times may be modeled with a dispersal kernel using a recursive Markovian algorithm. Observations arise from distance-dependent sampling at an array of detectors. As in the closed-population spatial model, the observed data likelihood relies on integration over the unknown animal locations; maximization of this likelihood yields estimates of the birth, death, movement, and detection parameters. The models were fitted to data from a live-trapping study of brushtail possums (Trichosurus vulpecula) in New Zealand. Simulations confirmed that spatial modeling can greatly reduce the bias of capture-recapture survival estimates and that there is a degree of robustness to misspecification of the dispersal kernel. An R package is available that includes various extensions.
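The distance-dependent detection at an array of detectors can be sketched with the half-normal kernel commonly used in spatially explicit capture-recapture; `g0`, `sigma`, and the detector grid below are illustrative, not values from the possum study.

```python
import numpy as np

def detect_prob(dist, g0=0.8, sigma=25.0):
    """Half-normal distance-dependent detection probability:
    g0 at the activity centre, decaying with distance (m)."""
    return g0 * np.exp(-dist ** 2 / (2 * sigma ** 2))

# A 3x3 detector grid at 50 m spacing and one animal's activity centre.
grid = np.array([(x, y) for x in (0, 50, 100) for y in (0, 50, 100)], float)
centre = np.array([40.0, 60.0])
d = np.linalg.norm(grid - centre, axis=1)
p = detect_prob(d)
print(np.round(p, 3))
```

In the full open-population model these per-detector probabilities enter a likelihood that also integrates over the unknown activity-centre locations and tracks birth, death, and movement between occasions.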


Assuntos
Modelos Biológicos , Dinâmica Populacional/estatística & dados numéricos , Migração Animal , Animais , Animais Selvagens , Viés , Biometria , Simulação por Computador , Ecossistema , Comportamento de Retorno ao Território Vital , Funções Verossimilhança , Nova Zelândia , Distribuição de Poisson , Crescimento Demográfico , Tamanho da Amostra , Análise Espaço-Temporal , Trichosurus
11.
Ecology; 99(7): 1547-1551, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29702727

ABSTRACT

N-mixture models provide an appealing alternative to mark-recapture models in that they allow estimation of detection probability and population size from count data, without requiring that individual animals be identified. There is, however, a cost to using N-mixture models: inference is very sensitive to the model's assumptions. We consider the effects of three violations of assumptions that might reasonably be expected in practice: double counting, unmodeled variation in population size over time, and unmodeled variation in detection probability over time. These three examples show that small violations of assumptions can lead to large biases in estimation. The violations we consider are not only qualitatively small, but are also small in the sense that they are unlikely to be detected using goodness-of-fit tests. In cases where reliable estimates of population size are needed, we encourage investigators to allocate resources to acquiring additional data, such as recaptures of marked individuals, for estimation of detection probabilities.
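The N-mixture likelihood itself is a marginalization over the latent abundance; a minimal single-site version (Poisson prior on N, binomial counts) is sketched below with invented counts and a truncation point `n_max`.

```python
import numpy as np
from scipy.stats import poisson, binom

def nmix_loglik(counts, lam, p, n_max=200):
    """Marginal log-likelihood of repeated counts at one site under an
    N-mixture model: sum over latent N of Pois(N; lam) * prod Bin(y | N, p)."""
    N = np.arange(n_max + 1)
    prior = poisson.pmf(N, lam)
    lik = prior * np.prod([binom.pmf(y, N, p) for y in counts], axis=0)
    return np.log(lik.sum())

counts = [12, 15, 11, 14]          # repeat visits to one site (toy data)
ll = nmix_loglik(counts, lam=30, p=0.45)
print(round(ll, 3))
```

The sensitivity the paper documents arises because very different (lam, p) pairs — and models violating the assumptions slightly — can produce nearly identical values of this marginal likelihood.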


Assuntos
Modelos Estatísticos , Animais , Viés , Densidade Demográfica , Probabilidade
12.
Genetics; 209(1): 65-76, 2018 May.
Article in English | MEDLINE | ID: mdl-29487138

ABSTRACT

Next-generation sequencing is an efficient method that allows for substantially more markers than previous technologies, providing opportunities for building high-density genetic linkage maps, which facilitate the development of nonmodel species' genomic assemblies and the investigation of their genes. However, constructing genetic maps using data generated via high-throughput sequencing technology (e.g., genotyping-by-sequencing) is complicated by the presence of sequencing errors and genotyping errors resulting from missing parental alleles due to low sequencing depth. If unaccounted for, these errors lead to inflated genetic maps. In addition, map construction in many species is performed using full-sibling family populations derived from the outcrossing of two individuals, where unknown parental phase and varying segregation types further complicate construction. We present a new methodology for modeling low coverage sequencing data in the construction of genetic linkage maps using full-sibling populations of diploid species, implemented in a package called GUSMap. Our model is based on the Lander-Green hidden Markov model but extended to account for errors present in sequencing data. We were able to obtain accurate estimates of the recombination fractions and overall map distance using GUSMap, while most existing mapping packages produced inflated genetic maps in the presence of errors. Our results demonstrate the feasibility of using low coverage sequencing data to produce genetic maps without requiring extensive filtering of potentially erroneous genotypes, provided that the associated errors are correctly accounted for in the model.
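The effect of low depth can be seen in a simplified diploid genotype likelihood with a per-read error rate: at depth 3, a true heterozygote showing only reference reads still carries appreciable heterozygote likelihood, which naive genotype calling discards. This is a simplification for illustration, not the actual GUSMap error model.

```python
import numpy as np
from scipy.stats import binom

def read_likelihoods(ref_reads, depth, eps=0.01):
    """P(observed reference-read count | true diploid genotype), allowing a
    per-read sequencing error rate eps. Genotypes: 0, 1, 2 reference alleles."""
    p_ref = np.array([eps, 0.5, 1 - eps])     # P(read shows ref | genotype)
    return binom.pmf(ref_reads, depth, p_ref)

# A true heterozygote sampled at depth 3 can easily show only ref reads,
# which naive calling would score as homozygous and inflate the map.
lik = read_likelihoods(ref_reads=3, depth=3)
print(np.round(lik, 4))
```

Carrying these per-genotype likelihoods into the hidden Markov model, instead of hard genotype calls, is what lets error-prone low-coverage data be used without aggressive filtering.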


Assuntos
Mapeamento Cromossômico , Cruzamentos Genéticos , Ligação Genética , Genética Populacional , Sequenciamento de Nucleotídeos em Larga Escala , Algoritmos , Alelos , Biologia Computacional/métodos , Simulação por Computador , Técnicas de Genotipagem , Cadeias de Markov , Modelos Genéticos , Polimorfismo de Nucleotídeo Único , Locos de Características Quantitativas , Análise de Sequência de DNA , Software
13.
Biometrics; 74(2): 626-635, 2018 Jun.
Article in English | MEDLINE | ID: mdl-28901008

ABSTRACT

The standard approach to fitting capture-recapture data collected in continuous time involves arbitrarily forcing the data into a series of distinct discrete capture sessions. We show how continuous-time models can be fitted as easily as discrete-time alternatives. The likelihood is factored so that efficient Markov chain Monte Carlo algorithms can be implemented for Bayesian estimation, available online in the R package ctime. We consider goodness-of-fit tests for behavior and heterogeneity effects as well as implementing models that allow for such effects.


Assuntos
Funções Verossimilhança , Modelos Estatísticos , Algoritmos , Teorema de Bayes , Cadeias de Markov , Método de Monte Carlo , Distribuição de Poisson , Fatores de Tempo
14.
Biometrics; 74(1): 369-377, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28672424

ABSTRACT

N-mixture models describe count data replicated in time and across sites in terms of abundance N and detectability p. They are popular because they allow inference about N while controlling for factors that influence p, without the need for marking animals. Using a capture-recapture perspective, we show that the loss of information that results from not marking animals is critical, making reliable statistical modeling of N and p problematic using count data alone. One cannot reliably fit a model in which the detection probabilities are distinct among repeat visits, as this model is overspecified. This makes uncontrolled variation in p problematic. By counterexample, we show that even if p is constant after adjusting for covariate effects (the "constant p" assumption), scientifically plausible alternative models in which N (or its expectation) is non-identifiable, or does not even exist as a parameter, lead to data that are practically indistinguishable from data generated under an N-mixture model. This is particularly the case for sparse data, as commonly seen in applications. We conclude that, under the constant p assumption, reliable inference is only possible for relative abundance in the absence of questionable and/or untestable assumptions, or with better quality data than seen in typical applications. Relative abundance models for counts can be readily fitted using Poisson regression in standard software such as R and are sufficiently flexible to allow controlling for p through the use of covariates while simultaneously modeling variation in relative abundance. If users require estimates of absolute abundance, they should collect auxiliary data that help with estimation of p.
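Fitting the recommended relative-abundance model amounts to a Poisson regression; below is a self-contained maximum likelihood sketch in which constant detectability is absorbed into the intercept. The covariate and coefficients are simulated, and the abstract's R workflow is replaced with a direct optimisation for portability.

```python
import numpy as np
from scipy.optimize import minimize

# Counts y at 100 sites; one habitat covariate drives relative abundance.
# A constant detection probability p multiplies the rate, so log(p) is
# absorbed into the intercept and only relative abundance is identified.
rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = rng.poisson(np.exp(0.5 + 0.8 * x))

X = np.column_stack([np.ones(100), x])

def nll(beta):
    """Poisson negative log-likelihood (dropping the constant log(y!))."""
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta)

fit = minimize(nll, np.zeros(2), method="BFGS")
print(np.round(fit.x, 2))
```

The slope is estimable without any detection data; only the intercept is confounded with p, which is exactly the abstract's claim about what count data alone can and cannot deliver.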


Assuntos
Distribuição Animal , Modelos Estatísticos , Animais , Modelos Lineares , Densidade Demográfica , Dinâmica Populacional
15.
Biometrics; 71(4): 1070-80, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26033530

ABSTRACT

Link et al. (2010, Biometrics 66, 178-185) define a general framework for analyzing capture-recapture data with potential misidentifications. In this framework, the observed vector of counts, y, is considered as a linear function of a vector of latent counts, x, such that y=Ax, with x assumed to follow a multinomial distribution conditional on the model parameters, θ. Bayesian methods are then applied by sampling from the joint posterior distribution of both x and θ. In particular, Link et al. (2010) propose a Metropolis-Hastings algorithm to sample from the full conditional distribution of x, where new proposals are generated by sequentially adding elements from a basis of the null space (kernel) of A. We consider this algorithm and show that using elements from a simple basis for the kernel of A may not produce an irreducible Markov chain. Instead, we require a Markov basis, as defined by Diaconis and Sturmfels (1998, The Annals of Statistics 26, 363-397). We illustrate the importance of Markov bases with three capture-recapture examples. We prove that a specific lattice basis is a Markov basis for a class of models including the original model considered by Link et al. (2010) and confirm that the specific basis used in their example with two sampling occasions is a Markov basis. The constructive nature of our proof provides an immediate method to obtain a Markov basis for any model in this class.
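The constraint that latent-count proposals must preserve y = Ax (and non-negativity) can be seen in a toy example: moves are generated by adding elements of the kernel of A. The matrix below is invented and far smaller than the models in the paper, whose point is that a simple kernel basis may fail to connect the whole space of valid x — hence the need for a Markov basis.

```python
import numpy as np

# Latent-count update: proposals must stay in the fibre {x >= 0 : Ax = y}.
A = np.array([[1, 0, 1],
              [0, 1, 1]])
x = np.array([3, 2, 1])            # current latent counts
y = A @ x                          # observed counts (fixed)

z = np.array([1, 1, -1])           # kernel element: A @ z = 0
assert np.all(A @ z == 0)

x_new = x + z                      # proposed latent counts
ok = bool(np.all(x_new >= 0) and np.all(A @ x_new == y))
print(x_new, ok)
```

A Metropolis-Hastings sampler repeats such moves (with random sign and basis element), rejecting any proposal that goes negative; irreducibility of the resulting chain is exactly what the choice of basis determines.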


Assuntos
Modelos Estatísticos , Dinâmica Populacional/estatística & dados numéricos , Algoritmos , Animais , Teorema de Bayes , Biometria/métodos , Modelos Lineares , Cadeias de Markov , Método de Monte Carlo
16.
Biometrics; 70(4): 775-82, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25311362

ABSTRACT

Motivated by field sampling of DNA fragments, we describe a general model for capture-recapture modeling of samples drawn one at a time in continuous-time. Our model is based on Poisson sampling where the sampling time may be unobserved. We show that previously described models correspond to partial likelihoods from our Poisson model and their use may be justified through arguments concerning S- and Bayes-ancillarity of discarded information. We demonstrate a further link to continuous-time capture-recapture models and explain observations that have been made about this class of models in terms of partial ancillarity. We illustrate application of our models using data from the European badger (Meles meles) in which genotyping of DNA fragments was subject to error.


Assuntos
DNA/genética , Genética Populacional , Modelos Estatísticos , Mustelidae/genética , Vigilância da População/métodos , Tamanho da Amostra , Animais , Simulação por Computador , DNA/análise , Interpretação Estatística de Dados , Genótipo
17.
Mol Ecol; 23(15): 3814-25, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24635414

ABSTRACT

A major goal of gut-content analysis is to quantify predation rates by predators in the field, which could provide insights into the mechanisms behind ecosystem structure and function, as well as quantification of ecosystem services provided. However, percentage-positive results from molecular assays are strongly influenced by factors other than predation rate, and thus can only be reliably used to quantify predation rates under very restrictive conditions. Here, we develop two statistical approaches, one using a parametric bootstrap and the other in terms of Bayesian inference, to build upon previous techniques that use DNA decay rates to rank predators by their rate of prey consumption, by allowing a statistical assessment of confidence in the inferred ranking. To demonstrate the utility of this technique in evaluating ecological data, we test web-building spiders for predation on a primary prey item, springtails. Using these approaches we found that an orb-weaving spider consumes springtail prey at a higher rate than a syntopic sheet-weaving spider, despite occupying microhabitats where springtails are less frequently encountered. We suggest that spider-web architecture (orb web vs. sheet web) is a primary determinant of prey-consumption rates within this assemblage of predators, which demonstrates the potential influence of predator foraging behaviour on trophic web structure. We also discuss how additional assumptions can be incorporated into the same analysis to allow broader application of the technique beyond the specific example presented. We believe that such modelling techniques can greatly advance the field of molecular gut-content analysis.
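The parametric-bootstrap idea can be sketched under a simple assumed model in which the probability of testing positive is 1 − exp(−λT) for a feeding rate λ and a known DNA detectability window T. All counts and windows below are hypothetical, not the spider data, and the model is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

def feeding_rate(pos, n, T):
    """Poisson-feeding estimate: P(positive) = 1 - exp(-lam * T), with T the
    (assumed known) DNA detectability window in hours."""
    q = pos / n
    return -np.log(1 - q) / T

# Hypothetical positives / tested, with different detectability windows.
pos_a, n_a, T_a = 34, 60, 12.0     # predator A
pos_b, n_b, T_b = 30, 65, 18.0     # predator B

# Parametric bootstrap for the ratio of feeding rates.
boot = []
for _ in range(2000):
    ra = feeding_rate(rng.binomial(n_a, pos_a / n_a), n_a, T_a)
    rb = feeding_rate(rng.binomial(n_b, pos_b / n_b), n_b, T_b)
    boot.append(ra / rb)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

An interval excluding 1 would support a confident ranking of the two predators' consumption rates, which is the statistical assessment of ranking confidence the abstract describes.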


Assuntos
DNA/análise , Ecologia/métodos , Cadeia Alimentar , Comportamento Predatório , Aranhas/fisiologia , Animais , Artrópodes , Teorema de Bayes , Ecossistema , Conteúdo Gastrointestinal , Modelos Estatísticos , Análise de Sequência de DNA
18.
Biometrics; 69(4): 1012-21, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24117027

ABSTRACT

We use Bayesian methods to explore fitting the von Bertalanffy length model to tag-recapture data. We consider two popular parameterizations of the von Bertalanffy model: the first models the data relative to age at first capture; the second in terms of length at first capture. Using data from a rainbow trout (Oncorhynchus mykiss) study, we explore the relationship between the assumptions and resulting inference using posterior predictive checking, cross-validation, and a simulation study. We find that untestable hierarchical assumptions placed on the nuisance parameters in each model can influence the resulting inference about parameters of interest. Researchers should carefully consider these assumptions when modeling growth from tag-recapture data.
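The length-at-first-capture parameterization is closely related to the Fabens increment form of the von Bertalanffy model, which can be fitted by nonlinear least squares as a simple non-Bayesian sketch; the simulated lengths and parameter values below are illustrative, and the paper's hierarchical Bayesian treatment goes well beyond this.

```python
import numpy as np
from scipy.optimize import curve_fit

def fabens(X, Linf, K):
    """Expected length increment between tagging and recapture (Fabens form
    of the von Bertalanffy model, parameterised by length at first capture)."""
    L1, dt = X
    return (Linf - L1) * (1 - np.exp(-K * dt))

rng = np.random.default_rng(7)
L1 = rng.uniform(150, 400, 80)             # length at tagging (mm)
dt = rng.uniform(0.2, 2.0, 80)             # years at liberty
growth = (500 - L1) * (1 - np.exp(-0.4 * dt)) + rng.normal(0, 5, 80)

(Linf_hat, K_hat), _ = curve_fit(fabens, (L1, dt), growth, p0=(450.0, 0.2))
print(round(Linf_hat, 1), round(K_hat, 3))
```

Note that nothing here requires knowing age at first capture; the age-based parameterization instead treats age (or length-at-age) assumptions as nuisance structure, which is where the two models' hierarchical assumptions diverge.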


Assuntos
Algoritmos , Tamanho Corporal/fisiologia , Interpretação Estatística de Dados , Modelos Estatísticos , Oncorhynchus mykiss/crescimento & desenvolvimento , Dinâmica Populacional , Vigilância da População/métodos , Animais , Simulação por Computador , Projetos de Pesquisa , Tamanho da Amostra
19.
Biometrics; 65(3): 833-40, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19173702

ABSTRACT

Sampling DNA noninvasively has advantages for identifying animals for uses such as mark-recapture modeling that require unique identification of animals in samples. Although it is possible to generate large amounts of data from noninvasive sources of DNA, a challenge is overcoming genotyping errors that can lead to incorrect identification of individuals. A major source of error is allelic dropout, which is failure of DNA amplification at one or more loci. This has the effect of heterozygous individuals being scored as homozygotes at those loci as only one allele is detected. If errors go undetected and the genotypes are naively used in mark-recapture models, significant overestimates of population size can occur. To avoid this it is common to reject low-quality samples but this may lead to the elimination of large amounts of data. It is preferable to retain these low-quality samples as they still contain usable information in the form of partial genotypes. Rather than trying to minimize error or discarding error-prone samples we model dropout in our analysis. We describe a method based on data augmentation that allows us to model data from samples that include uncertain genotypes. Application is illustrated using data from the European badger (Meles meles).
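The mechanics of allelic dropout can be written down directly for a single heterozygous locus: with per-allele dropout probability d, a false homozygote is scored with probability 2d(1 − d). This is an illustrative fragment, not the paper's data-augmentation model, and the dropout rate below is invented.

```python
def obs_probs(true_het, dropout):
    """P(scored genotype | truth) for one locus under allelic dropout:
    each allele of a heterozygote independently fails to amplify with
    probability `dropout`; if both drop out there is no call."""
    if not true_het:
        return {"hom": 1.0, "het": 0.0, "fail": 0.0}
    d = dropout
    return {
        "hom": 2 * d * (1 - d),    # one allele dropped -> false homozygote
        "het": (1 - d) ** 2,       # both amplify -> correct call
        "fail": d ** 2,            # both dropped -> missing genotype
    }

p = obs_probs(True, dropout=0.2)
print(p)
```

Because false homozygotes make one animal look like several, treating these probabilities formally (rather than discarding low-quality samples) is what prevents the population-size overestimates the abstract describes.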


Assuntos
DNA/análise , DNA/genética , Interpretação Estatística de Dados , Ecossistema , Genética Populacional , Modelos Genéticos , Modelos Estatísticos , Densidade Demográfica , Animais , Simulação por Computador , Tamanho da Amostra
20.
J Appl Physiol (1985); 105(2): 555-60, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18511524

ABSTRACT

We outline the use of hierarchical modeling for inference about the categorization of subjects into "responder" and "nonresponder" classes when the true status of the subject is latent (hidden). If uncertainty of classification is ignored during analysis, then statistical inference may be unreliable. An important advantage of hierarchical modeling is that it facilitates the correct modeling of the hidden variable in terms of predictor variables and hypothesized biological relationships. This allows researchers to formalize inference that can address questions about why some subjects respond and others do not. We illustrate our approach using a recent study of hepcidin excretion in female marathon runners (Roecker L, Meier-Buttermilch R, Brechte L, Nemeth E, Ganz T. Eur J Appl Physiol 95: 569-571, 2005).
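Collapsing the hierarchical model to its essence, classification uncertainty can be expressed as a posterior responder probability under a two-component mixture rather than a hard threshold; the means, SD, and prior probability below are hypothetical, not values from the hepcidin study.

```python
import numpy as np
from scipy.stats import norm

# Latent responder status drives the mean change in the outcome.
prior_resp = 0.6                   # assumed P(responder)
mu_r, mu_n, sd = -2.0, 0.0, 1.0    # mean change: responders / non-responders

def p_responder(change):
    """Posterior probability that a subject is a responder given the
    observed change, instead of a hard responder/non-responder cut."""
    f_r = prior_resp * norm.pdf(change, mu_r, sd)
    f_n = (1 - prior_resp) * norm.pdf(change, mu_n, sd)
    return f_r / (f_r + f_n)

print(round(p_responder(-2.5), 3), round(p_responder(0.5), 3))
```

In the full hierarchical model the component means, and the prior probability itself, would be modeled in terms of predictor variables, which is what lets the analysis address *why* some subjects respond.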


Assuntos
Fisiologia/classificação , Adulto , Algoritmos , Peptídeos Catiônicos Antimicrobianos/sangue , Interpretação Estatística de Dados , Feminino , Hepcidinas , Humanos , Modelos Lineares , Modelos Estatísticos , Corrida/fisiologia , Software